
    Tacsel: Shape-Changing Tactile Screen applied for Eyes-Free Interaction in Cockpit

    Touch screens have become widely used in recent years and are now integrated into numerous electronic devices, since they allow the user to interact directly with what is displayed. However, these technologies are poorly suited to complex systems in which visual attention is very limited (cockpit manipulation, driving tasks, etc.). This paper introduces the concept of the Tacsel, the smallest dynamic element of a tactile screen. Tacsels add shape-changing and flexible properties to touch-screen devices, enabling eyes-free interaction. We developed a high-resolution Tacsel prototype to demonstrate its technical feasibility and its potential in a cockpit context. Three interaction scenarios are described, and a workshop with brainstorming and video-prototyping was conducted to evaluate the proposed Tacsel in several cockpit tasks. Results showed that interactive Tacsels have real potential for future cockpits. Several other possible applications are described, and advantages and limitations are discussed.

    A Study on the Simultaneous Use of Two Modalities for SoundPainting Gesture Recognition

    Nowadays, gestures are being adopted as a new modality in the field of Human-Computer Interaction (HCI), where physical movements of the whole body can perform almost unlimited actions. SoundPainting is a language of artistic composition that has been in use for more than forty years. However, work on the recognition of SoundPainting gestures is limited, and existing approaches do not take into account the finger and hand movements that constitute an essential part of SoundPainting. In this context, we conducted a study exploring the combination of 3D postures and muscle activity for the recognition of SoundPainting gestures. To carry out this study, we created a SoundPainting database of 17 gestures with data from two sensors (a Kinect and a Myo armband), and we formulated four hypotheses concerning recognition accuracy. The results allowed us to characterize the best sensor according to the typology of the gesture, to show that a "simple" combination of the two sensors does not necessarily improve recognition, that a combination of features is not necessarily more effective than a single well-chosen feature, and finally that changing the acquisition rate of the data provided by these sensors does not have a significant impact on gesture recognition.
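    The "simple" combination the abstract refers to is commonly implemented as early fusion: per-frame feature vectors from each sensor are concatenated before classification. The sketch below is purely illustrative and not the authors' pipeline; the gesture labels, feature values, and the nearest-centroid classifier are all hypothetical stand-ins.

    ```python
    def early_fusion(kinect_feats, myo_feats):
        """Concatenate per-frame features from both sensors (early fusion).
        This is the 'simple' combination that the study found does not
        necessarily improve recognition over one well-chosen feature."""
        return kinect_feats + myo_feats

    def centroid(vectors):
        """Component-wise mean of a list of equal-length feature vectors."""
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def classify(sample, centroids):
        """Return the gesture label whose centroid is closest (Euclidean)."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda label: dist2(sample, centroids[label]))

    # Hypothetical toy data: two gestures, fused 4-D feature vectors
    # (first two components standing in for Kinect posture features,
    # last two for Myo EMG features).
    train = {
        "whole_group": [[1.0, 0.2, 0.1, 0.9], [0.9, 0.3, 0.2, 1.0]],
        "long_tone":   [[0.1, 1.0, 0.8, 0.1], [0.2, 0.9, 0.9, 0.2]],
    }
    centroids = {g: centroid(vs) for g, vs in train.items()}
    print(classify(early_fusion([0.95, 0.25], [0.15, 0.95]), centroids))  # → whole_group
    ```

    A nearest-centroid rule is chosen here only to keep the sketch self-contained; any classifier could sit on top of the fused vectors.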

    Toward "Pseudo-Haptic Avatars": Modifying the Visual Animation of Self-Avatar Can Simulate the Perception of Weight Lifting

    In this paper we study how the visual animation of a self-avatar can be artificially modified in real time in order to generate different haptic perceptions. In our experimental setup, participants could watch their self-avatar in a virtual environment in mirror mode while performing a weight-lifting task, mapping their gestures onto the self-animated avatar in real time using a Kinect. We introduce three kinds of modification of the visual animation of the self-avatar according to the effort delivered by the virtual avatar: 1) changes in the spatial mapping between the user's gestures and the avatar's, 2) different motion profiles of the animation, and 3) changes in the posture of the avatar (upper-body inclination). The experimental task required participants to order four virtual dumbbells according to their virtual weight. The user lifted each virtual dumbbell by means of a tangible stick, and the animation of the avatar was modulated according to the virtual weight of the dumbbell. The results showed that altering the spatial mapping delivered the best performance; nevertheless, participants globally appreciated all the visual effects. Our results pave the way for exploiting such novel techniques in various VR applications, such as sport training, exercise games, or industrial training scenarios, in single-user or collaborative mode.

    Elastic Images: Perceiving Local Elasticity of Images Through a Novel Pseudo-Haptic Deformation Effect

    We introduce Elastic Images, a novel pseudo-haptic feedback technique that enables the perception of the local elasticity of images without the need for any haptic device. The proposed approach focuses on whether visual feedback alone can induce a sensation of stiffness when the user interacts with an image using a standard mouse. When clicking on an Elastic Image, the user deforms it locally according to its elastic properties. To reinforce the effect, we also propose procedurally generated shadows and creases that simulate the compressibility of the image, and several mouse-cursor replacements that enhance the perception of pressure and stiffness. A psychophysical experiment was conducted to quantify this novel pseudo-haptic perception and determine its perceptual threshold (Just Noticeable Difference). The results showed that users were able to recognize up to eight different stiffness values with our method, confirming that it provides a perceivable and exploitable sensation of elasticity. Potential applications range from pressure sensing in product catalogs and games to graphical user interfaces with more expressive widgets.
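    The core of a pseudo-haptic elasticity effect of this kind is a mapping from a fixed cursor input to a visual deformation that scales inversely with simulated stiffness, with discriminability often modeled by Weber's law. The sketch below is an illustrative reconstruction, not the paper's implementation; the function names and the 0.2 Weber fraction are assumptions.

    ```python
    def deformation_depth(press_mm, stiffness):
        """Visual indentation (mm) shown for a simulated press:
        for the same cursor input, softer images (lower stiffness)
        deform more, which is what induces the pseudo-haptic
        sensation of elasticity (a Hooke's-law-style mapping)."""
        return press_mm / stiffness

    def weber_jnd(reference_stiffness, weber_fraction=0.2):
        """Smallest stiffness increment a user would notice relative
        to a reference, assuming Weber's law with a hypothetical
        fraction of 0.2 (the paper measured the actual JND)."""
        return weber_fraction * reference_stiffness

    # Same press, two stiffness values: the softer image dents twice as deep.
    print(deformation_depth(10.0, 1.0))  # → 10.0
    print(deformation_depth(10.0, 2.0))  # → 5.0
    ```

    Under such a model, the number of distinguishable stiffness levels in a fixed range follows from chaining JND-sized steps, which is one way to interpret the "up to eight different stiffness values" result.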

    An empirical evaluation of generative adversarial nets in synthesizing X-ray chest images

    In the last decade, Generative Adversarial Nets (GANs) have become a subject of growing interest in multiple research fields. In this paper, we focus on applications in the medical field by attempting to generate realistic chest X-ray images. A heuristic approach is adopted to perform an extensive evaluation of different architecture configurations and optimization algorithms, and we propose an optimal configuration of the baseline Deep Convolutional GAN (DCGAN) based on empirical findings. Additionally, we highlight the technical limitations of GANs and provide an analysis of their high memory requirements, which we reduce by 1.2-7 percent by removing unnecessary layers. Finally, we verify that the loss of the discriminator can be used as an assessment metric.
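    The discriminator loss mentioned as an assessment metric is typically the standard binary cross-entropy objective. The sketch below shows that quantity in pure Python; it is a generic GAN formula, not code from the paper, and the `eps` guard is an implementation convenience.

    ```python
    import math

    def discriminator_loss(d_real, d_fake, eps=1e-12):
        """Binary cross-entropy loss of a GAN discriminator.

        d_real: discriminator output probabilities on real images
        d_fake: discriminator output probabilities on generated images

        When the discriminator cannot tell real from fake, every output
        is near 0.5 and the loss approaches 2*ln(2) ~= 1.386 — which is
        why its value can serve as a rough generator-quality signal."""
        real_term = -sum(math.log(p + eps) for p in d_real) / len(d_real)
        fake_term = -sum(math.log(1.0 - p + eps) for p in d_fake) / len(d_fake)
        return real_term + fake_term

    # Fully confused discriminator: loss sits at the 2*ln(2) equilibrium.
    print(round(discriminator_loss([0.5, 0.5], [0.5, 0.5]), 4))  # → 1.3863
    ```

    A confident discriminator (outputs near 1 on real, near 0 on fake) drives this loss toward zero, which usually indicates the generator's samples are easy to reject.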

    Is virtual reality the solution? A comparison between 3D and 2D creative sketching tools in the early design process

    Creativity is key in the early phases of innovation processes. With the rapid evolution of technologies, designers now have access to various tools to support this activity, and virtual reality (VR) is spreading across many domains, especially design. However, does VR really facilitate creativity in the initial ideation phases? We compare two sketching modalities through dedicated creativity support tools (CSTs): one in VR and one on a 2D interactive whiteboard. We propose a two-part creativity task (divergent and convergent thinking) for two groups of 30 participants each, recording user experience, creative experience, and creative performance. Our results show that VR is more stimulating, attractive, and engaging, and we also observe a higher level of creativity among participants using the VR CST. These results indicate that VR is an effective and relevant tool to boost creativity and that this effect might carry over to subsequent creative tasks.

    Robot Mirroring: Promoting Empathy with an Artificial Agent by Reflecting the User’s Physiological Affective States

    Self-tracking aims to increase awareness, decrease undesired behaviors, and ultimately lead towards a healthier lifestyle. However, inappropriate communication of self-tracking results can cause the opposite effect. Subtle self-tracking feedback is an alternative that can be provided with the aid of an artificial agent representing the self. Hence, we propose a wearable pet that reflects the user's affective states through visual and haptic feedback. By eliciting empathy and fostering helping behaviors towards it, users would indirectly help themselves. A wearable prototype was built, and three user studies were performed to evaluate the appropriateness of the proposed affective representations. Visual representations using facial and body cues were clear for valence and less clear for arousal. Haptic interoceptive patterns emulating heart-rate levels matched the desired feedback urgency levels up to a saturation frequency. The integrated visuo-haptic representations matched participants' own affective experiences. From the results, we derived three design guidelines for future robot-mirroring wearable systems: physical embodiment, interoceptive feedback, and customization.

    Precariousness, Social Exclusion, and Functional Diversity (Disability): Logics and Subjective Effects of Contemporary Social Suffering (III). Teaching Innovation in Philosophy

    The PIMCD project Precariedad, exclusión social y diversidad funcional (discapacidad): lógicas y efectos subjetivos del sufrimiento social contemporáneo (III). Innovación docente en Filosofía addresses concepts that have generally tended to be avoided in the academic teaching of philosophy. This is the third edition of a PIMCD that has received funding in recent UCM PIMCD calls, from which collective works have been published by Ediciones Complutense and Siglo XXI.

    The importance of privacy and ethics in emotion recognition and emotion elicitation in Virtual Reality scenarios

    Affective Computing is the study and development of systems that can automatically recognize, simulate, and elicit emotions in users (Picard, 2000). It has been applied in several areas such as education, security, healthcare, and entertainment (Daily et al., 2017; Gross & Levenson, 1995). In most studies, emotion recognition and elicitation have been carried out in non-immersive environments (Aharonson & Amir, 2006; Faita et al., 2016). Virtual Reality (VR) is defined as an artificial environment created with computer hardware and software and presented to the user in such a way that it appears and feels like a real environment (Aukstakalnis et al., 1992). It provides simulated experiences that increase the feeling of immersion or presence experienced by the user (Okechukwu & Udoka, 2011). In recent years, affective interactions using VR have been analyzed in studies demonstrating that virtual environments can be used to recognize or elicit user emotions such as relaxation, stress, or anxiety (Marín-Morales et al., 2020; Riva et al., 2007). In addition, some works show that emotional content increases the sense of immersion in a virtual environment (Gorini et al., 2011; Marín-Morales et al., 2018). However, regardless of the various studies conducted, the success of emotion recognition in virtual reality scenarios will depend on how safe it can be made and how the privacy of users can be protected and respected (Cowie, 2014). Users must also be able to trust the virtual reality scenario, knowing that their emotional states can be recognized and elicited. The purpose of this panel is to address ethics and privacy issues, risks, challenges, and opportunities concerning the impact that emotion recognition and elicitation can achieve in virtual reality scenarios.